
Search results for: "Geoffrey Hinton"


25 mentions found


At his annual shareholder meeting in Omaha, Nebraska, the 93-year-old co-founder, chairman and CEO of Berkshire Hathaway issued a stark warning about the potential dangers of the technology. “We let a genie out of the bottle when we developed nuclear weapons,” he said Saturday. JPMorgan Chase, the world’s largest bank by market capitalization, is also exploring the potential of generative AI within its own ecosystem, Dimon said. Dozens of AI industry leaders, academics and even some celebrities have signed a statement warning of an “extinction” risk from AI. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.
Persons: Warren Buffett, Greg Abel, Buffett, Abel, Jamie Dimon, Dimon, Jeffrey Sonnenfeld, Sonnenfeld, Doug McMillion, James Quincy, Sam Altman, Geoffrey Hinton Organizations: CNN, Berkshire Hathaway, International Monetary Fund, Nvidia, Microsoft, JPMorgan, JPMorgan Chase, Yale, Walmart, Xerox, Google Locations: New York, Omaha, Nebraska
The police had used a facial-recognition AI program that identified her as the suspect based on an old mugshot. The Detroit Police Department said that it restricts the use of the facial-recognition AI program to violent crimes and that matches it makes are just investigation leads. The study also found that in a hypothetical murder trial, the AI models were more likely to propose the death penalty for an AAE speaker. A novel proposal: One reason for these failings is that the people and companies building AI aren't representative of the world that AI models are supposed to encapsulate. Bardlavens leads a team that aims to ensure equity is considered and baked into Adobe AI tools.
Persons: Woodruff, Ivan Land, Joy Buolamwini, Timnit Gebru, Valentin Hofmann, Geoffrey Hinton, Christopher Lafayette, Udezue, John Pasmore, Latimer, Buolamwini, Timothy Bardlavens, Bardlavens, Esther Dyson, Dyson, Arturo Villanueva, Villanueva, Andrew Mahon Organizations: Business Insider, Detroit Police Department, Court of Michigan, Microsoft, IBM, OpenAI, Allen Institute, Dartmouth College, Center for Education Statistics, Big Tech, Meta, Google, Alza, Adobe Locations: Detroit, Africa, Southeast Asia, North America, Europe
On Wednesday, the Association for Computing Machinery, the world’s largest society of computing professionals, announced that this year’s Turing Award will go to Avi Wigderson, an Israeli-born mathematician and theoretical computer scientist who specializes in randomness. Often called the Nobel Prize of computing, the Turing Award comes with a $1 million prize. The award is named for Alan Turing, the British mathematician who helped create the foundations for modern computing in the mid-20th century. Other recent winners include Ed Catmull and Pat Hanrahan, who helped create the computer-generated imagery, or C.G.I., that drives modern movies and television, and the A.I. researchers Geoffrey Hinton, Yann LeCun and Yoshua Bengio, who nurtured the techniques that gave rise to chatbots like ChatGPT.
Persons: Turing, Avi Wigderson, Alan Turing, Ed Catmull, Pat Hanrahan, Geoffrey Hinton, Yann LeCun, Yoshua Bengio Organizations: Association for Computing Machinery Locations: Israel, Britain
OpenAI and Meta are close to unveiling AI models that can reason and plan, the FT reported. OpenAI and Meta are reportedly preparing to release more advanced AI models that would be able to help problem-solve and take on more complex tasks. Representatives for Meta and OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours. Getting AI models to reason and plan is an important step toward achieving artificial general intelligence (AGI), which both Meta and OpenAI have claimed to be aiming for. Elon Musk, a longtime AI skeptic, recently estimated that AI would outsmart humans within two years.
Persons: Brad Lightcap, Joelle Pineau, John Carmack, Bengio, Geoffrey Hinton, Elon Musk, Musk Organizations: OpenAI, Meta, Financial Times, Business Insider
“But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” Harris said. First, Gladstone AI said, the most advanced AI systems could be weaponized to inflict potentially irreversible damage. Safety concerns: Harris, the Gladstone AI executive, said the “unprecedented level of access” his team had to officials in the public and private sector led to the startling conclusions. Gladstone AI said it spoke to technical and leadership teams from ChatGPT owner OpenAI, Google DeepMind, Facebook parent Meta and Anthropic. Some employees at AI companies are sharing similar concerns in private, according to Gladstone AI.
Persons: Jeremie Harris, Harris, Robyn Patterson, Patterson, Joe Biden, Geoffrey Hinton, Hinton, Elon Musk, Lina Khan Organizations: CNN, US State Department, Gladstone AI, Google, Facebook, Yale, Federal Trade Commission, OpenAI, Nvidia Locations: New York, United States
The investment analyst team led by Gary Yu has a $140 price target and overweight rating on Baidu's U.S.-listed shares. "We believe the current AI cloud integration between Galaxy AI and Ernie is just the first step," Yu said. For all the interest in AI stocks, China markets this year are still grappling with worries about whether Beijing is doing enough to support economic growth. They have a price target of 160 yuan on Shanghai-listed shares of Cambricon — upside of 12% from Friday's levels. They have a price target of 380 yuan on Shanghai-listed Kingsoft, up more than 50% from Friday's levels.
Persons: Gary Yu, Yu, Fawne Jiang, Jiang, Alex Yao, Yao, Geoffrey Hinton, Cade Metz, Hinton, Metz Organizations: Morgan Stanley, Bloomberg, Baidu, Huawei, Galaxy, Benchmark, JPMorgan, Bernstein, Sinodata, EPFR, Google, Facebook, Microsoft, Shenzhen Stock Exchange, China Equity Funds, Nvidia Locations: China, U.S., Shenzhen, Beijing, Shanghai
Sam Altman, CEO of OpenAI, during a panel session at the World Economic Forum in Davos, Switzerland, on Jan. 18, 2024. Altman was temporarily booted from OpenAI in November in a shock move that laid bare concerns around the governance of the companies behind the most powerful AI systems. In a discussion at the World Economic Forum in Davos, Altman said his ouster was a "microcosm" of the stresses faced by OpenAI and other AI labs internally. "We're already seeing areas where AI has the ability to unlock our understanding ... where humans haven't been able to make that type of progress." Avoiding a 's--- show': Altman wasn't the only top tech executive asked about AI risks at Davos.
Persons: Sam Altman, Altman, Aidan Gomez, Gomez, Arjun Kharpal, Kharpal, Lila Ibrahim, Ibrahim, Marc Benioff, Elon Musk, Steve Wozniak, Andrew Yang, Geoffrey Hinton, Hinton, Benioff Organizations: World Economic Forum, Bloomberg, Getty, Microsoft, DeepMind, Salesforce, Cohere, CNBC, ABC News, ABC, OpenAI, CBS Locations: Davos, Switzerland, United States, Hiroshima
Some companies expect AI to be an ultimate solution, but it doesn't work like that, he said. Generative AI programs, like ChatGPT, have been used by individuals for school, business, and more. For Lightcap, the most overhyped aspect of AI is that it can completely transform a business "in one fell swoop." Morgan Stanley predicts that AI tools will increase earnings for jobs across industries by at least $83 million by 2030. However, there are naysayers who fear generative AI will be used for more supervillain-type activities, like stealing artwork or replacing jobs.
Persons: Brad Lightcap, Lightcap, Geoffrey Hinton Organizations: OpenAI, Morgan Stanley, CNBC
The New York Times list of "who's who" in AI has been slammed for featuring zero women. "Godmother of AI" Fei-Fei Li criticized the list, writing, "It's not about me, but all of us in AI." The New York Times' profile of "who's who" in AI, published Sunday, has drawn criticism for featuring zero women. "You literally erased all the heavy hitting women of AI and but included people who are more 'influencers,'" wrote Daneshjou. The New York Times did not immediately respond to a request for comment from Business Insider, sent outside regular business hours.
Persons: Fei-Fei Li, Kara Swisher, Li, Daneshjou, Elon Musk Organizations: New York Times, Business Insider
Foundation models like the one built by Microsoft (MSFT.O)-backed OpenAI are AI systems trained on large sets of data, with the ability to learn from new data to perform various tasks. In a meeting of the countries' economy ministers on Oct. 30 in Rome, France persuaded Italy and Germany to support a proposal, sources told Reuters. Until then, negotiations had gone smoothly, with lawmakers making compromises across several other conflict areas such as regulating high-risk AI, sources said. France-based AI company Mistral and Germany's Aleph Alpha have criticised the tiered approach to regulating foundation models, winning support from their respective countries. Other pending issues in the talks include definition of AI, fundamental rights impact assessment, law enforcement exceptions and national security exceptions, sources told Reuters.
Persons: Carlos Barria, Thierry Breton, Geoffrey Hinton, Mark Brakel, Supantha Mukherjee, Josephine Mason, Alexander Smith Organizations: Reuters, Microsoft, European Commission, Mistral, Aleph Alpha, Future of Life Institute, Thomson Reuters Locations: San Francisco, California, U.S., Stockholm, Brussels, London, France, Germany, Italy, Rome, Spain, Belgium
He announced plans for a new national strategy for AI development to counter the West. The United Kingdom has eight. Similarly, close to 300 authors of these systems come from the United States. Another 140 are from the United Kingdom. Geoffrey Hinton, the British-Canadian AI researcher named a "godfather of AI," for instance, has said he's worried about "bad actors" like Putin using the AI tools he's creating.
Persons: Putin, Vladimir Putin, Geoffrey Hinton Organizations: Stanford's Institute for Human-Centered Artificial Intelligence Locations: Moscow, Russia, United States, United Kingdom
As OpenAI employees celebrated the return of CEO Sam Altman with a five-alarm office party, OpenAI software engineer Steven Heidel was busy publicly rebuffing overtures from Salesforce CEO Marc Benioff. Heidel was one of more than 700 OpenAI employees whose threatened exodus halted a would-be mutiny at one of Silicon Valley's most important AI companies. He was previously a scientist at Facebook AI Research and worked as a member of Google Brain under the supervision of Prof. Geoffrey Hinton and Ilya Sutskever. Alec Radford: Radford was hired in 2016 from a small AI company he founded in his dorm room. Tao Xu: technical staff, worked on GPT-4 and Whisper; Christine McLeavey: technical staff, with contributions to music-related products; Christina Kim: technical staff; Christopher Hesse: technical staff; Heewoo Jun: technical staff, research; Alex Nichol: technical staff, research; William Fedus: technical staff, research; Ilge Akkaya: technical staff, research; Vineet Kosaraju: technical staff, research; Henrique Ponde de Oliveira Pinto: technical staff; Aditya Ramesh: technical staff, developed DALL-E and DALL-E 2; Prafulla Dhariwal: research scientist; Hunter Lightman: technical staff; Harrison Edwards: research scientist; Yura Burda: machine learning researcher; Tyna Eloundou: technical staff, research; Pamela Mishkin: researcher; Casey Chu: researcher; David Dohan: technical staff, research; Aidan Clark: researcher; Raul Puri: research scientist; Leo Gao: technical staff, research; Yang Song: technical staff, research; Giambattista Parascandolo; Todor Markov: machine learning researcher; Nick Ryder: technical staff
Persons: Sam Altman, Steven Heidel, Marc Benioff, Heidel, Altman, Mira Murati, Murati, Brad Lightcap, Lightcap, Jason Kwon, Kwon, Wojciech Zaremba, Geoffrey Hinton, Ilya Sutskever, Alec Radford, Radford, Peter Welinder, Anna Makanju, Andrej Karpathy, Michael Petrov, Petrov, Greg Brockman, Miles Brundage, Brundage, John Schulman, Srinivas Narayanan, Scott Grey, Grey, Bob McGrew, Che Chang, Lillian Weng, Mark Chen, Barret Zoph, Peter Deng, Jan Leike, Evan Morikawa, Jong Wook Kim, Tao Xu, Christine McLeavey, Christina Kim, Christopher Hesse, Heewoo Jun, Alex Nichol, William Fedus, Henrique Ponde de Oliveira Pinto, Aditya Ramesh, Hunter Lightman, Harrison Edwards, Yura Burda, Tyna Eloundou, Pamela Mishkin, Casey Chu, David Dohan, Aidan Clark, Raul Puri, Leo Gao, Yang Song, Giambattista Parascandolo, Todor Markov, Nick Ryder Organizations: Business Insider, BI, OpenAI, Khosla Ventures, Facebook AI Research, Google, Tesla, U.S. Department of Energy, Oxford University, Safety Systems, Frontiers Research Locations: Albania, Canada
of OpenAI, the leader in commercializing generative A.I. By Monday, he had not only been fired by his board — he had also joined Microsoft, the start-up’s biggest backer. A recap: OpenAI’s board fired Altman for not being “consistently candid.” Greg Brockman, another co-founder, was stripped of his chairman title and quit. Talks to bring Altman back broke down, with OpenAI’s board eventually naming Emmett Shear, the former C.E.O. (Some OpenAI employees wrote on X that “OpenAI is nothing without its people,” posts that Altman liked.)
Persons: Sam Altman, Altman, Greg Brockman, Emmett Shear, Mira Murati, Brockman, Ilya Sutskever, Elon Musk, Geoffrey Hinton Organizations: Microsoft, OpenAI
However, overemphasizing the dangers of AI risks paralyzing debate at a pivotal moment. "I'm not scared of A.I.," LeCun told the magazine. While Hinton and Meta's chief AI scientist LeCun have butted heads, fellow collaborator and third AI godfather Yoshua Bengio has stressed that this unknown is the real issue.
Persons: Geoffrey Hinton, Hinton, Yann LeCun, Turing, LeCun, Yoshua Bengio, Joshua Rothman Organizations: Big Tech, Google, The New Yorker
AI godfather Yoshua Bengio says the risks of AI should not be underplayed. His remarks come after Meta's Yann LeCun accused Bengio and AI founders of "fear-mongering." Claims by Meta's chief AI scientist, Yann LeCun, that AI won't wipe out humanity are dangerous and wrong, according to one of his fellow AI godfathers. "If your fear-mongering campaigns succeed, they will inevitably result in what you and I would identify as a catastrophe: a small number of companies will control AI," LeCun wrote. "Existential risk is one problem but the concentration of power, in my opinion, is the number two problem," he said.
Persons: Yoshua Bengio, Bengio, Yann LeCun, LeCun, Andrew Ng, Geoffrey Hinton, Hinton Organizations: Meta, Bell Labs, Google
Andrew Ng, formerly of Google Brain, said Big Tech is exaggerating the risk of AI wiping out humans. Some of the biggest figures in artificial intelligence are publicly arguing whether AI is really an extinction risk, after AI scientist Andrew Ng said such claims were a cynical play by Big Tech. Andrew Ng, a cofounder of Google Brain, suggested to The Australian Financial Review that Big Tech was seeking to inflate fears around AI for its own benefit. — Geoffrey Hinton (@geoffreyhinton) October 31, 2023. Meta's chief AI scientist Yann LeCun, also known as an AI godfather for his work with Hinton, sided with Ng.
Persons: Andrew Ng, Ng, Sam Altman, Elon Musk, Demis Hassabis, Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Hinton, LeCun, Meredith Whittaker, Whittaker Organizations: Google, Google Brain, Big Tech, OpenAI, DeepMind, Australian Financial Review
LONDON, Oct 31 (Reuters) - Britain will host the world's first global artificial intelligence (AI) safety summit this week to examine the risks of the fast-growing technology and kickstart an international dialogue on regulation of it. The aim of the summit is to start a global conversation on the future regulation of AI. Currently there are no broad-based global regulations focusing on AI safety, although some governments have started drawing up their own rules. A recent Financial Times report said Sunak plans to launch a global advisory board for AI regulation, modeled on the Intergovernmental Panel on Climate Change (IPCC). When Sunak announced the summit in June, some questioned how well-equipped Britain was to lead a global initiative on AI regulation.
Persons: Olaf Scholz, Justin Trudeau, Kamala Harris, Ursula von der Leyen, Wu Zhaohui, Antonio Guterres, James, Demis Hassabis, Sam Altman, Elon Musk, Stuart Russell, Geoffrey Hinton, Alan Turing, Rishi Sunak, Sunak, Joe Biden, Martin Coulter, Josephine Mason, Christina Fincher Organizations: WHO, United Nations, Google, Microsoft, OpenAI, Alan Turing Institute, European Union, Alibaba, Thomson Reuters Locations: Britain, England, Bletchley, Beijing, United States, China, U.S.
Now, frontier AI has become the latest buzzword as concerns grow that the emerging technology has capabilities that could endanger humanity. The debate comes to a head Wednesday, when British Prime Minister Rishi Sunak hosts a two-day summit focused on frontier AI. In a speech last week, Sunak said only governments — not AI companies — can keep people safe from the technology’s risks. Frontier AI is shorthand for the latest and most powerful systems that go right up to the edge of AI’s capabilities. That makes frontier AI systems “dangerous because they’re not perfectly knowledgeable,” Clune said.
Persons: Rishi Sunak, Kamala Harris, Ursula von der Leyen, Alan Turing, Sunak, Jeff Clune, Clune, Elon Musk, Sam Altman, Joe Biden, Geoffrey Hinton, Yoshua Bengio, Francine Bennett, Ada Lovelace, Deb Raji, Raji, Dario Amodei, Jack Clark, Carsten Jung, Jill Lawless Organizations: University of British Columbia, European Union, Ada Lovelace Institute, University of California, Microsoft, DeepMind, Anthropic, Institute for Public Policy Research, Associated Press Locations: Bletchley, EU, Brussels, China, U.S., Beijing, London, Berkeley
The letter, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks. Currently there are no broad-based regulations focusing on AI safety, and the first set of legislation from the European Union has yet to become law, as lawmakers have yet to agree on several issues. "It (investments in AI safety) needs to happen fast, because AI is progressing much faster than the precautions taken," he said. Since the launch of OpenAI's generative AI models, top academics and prominent CEOs such as Elon Musk have warned about the risks of AI, including calling for a six-month pause in developing powerful AI systems. "There are more regulations on sandwich shops than there are on AI companies."
Persons: Dado Ruvic, Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song, Yuval Noah Harari, Elon Musk, Stuart Russell, Supantha Mukherjee, Miral Organizations: Reuters, European Union, Thomson Reuters Locations: Stockholm, London
LONDON, Oct 18 (Reuters) - Britain will host the world's first global artificial intelligence (AI) safety summit next month, aiming to carve out a role following Brexit as an arbiter between the United States, China, and the European Union in a key tech sector. The Nov. 1-2 summit will focus heavily on the existential threat some lawmakers, including Britain's Prime Minister Rishi Sunak, fear AI poses. Sunak, who wants the UK to become a hub for AI safety, has warned the technology could be used by criminals and terrorists to create weapons of mass destruction. Critics question why Britain has appointed itself the centre of AI safety. "We are now reflecting on potential EU participation," a spokesperson told Reuters.
Persons: Dado Ruvic, Rishi Sunak, Sunak, Alan Turing, Kamala Harris, Demis Hassabis, Matt Clifford, Clifford, Stephanie Hare, Elon Musk, Geoffrey Hinton, Marc Warner, Vera Jourova, Brando Benifei, Dragos Tudorache, Benifei, Jeremy Hunt, Martin Coulter, Matt Scuffham, Mark Potter Organizations: Reuters, European Union, OpenAI, Google, Politico, Thomson Reuters Locations: Britain, United States, China, England, France, Germany, London, U.S., San Francisco, Beijing, Bletchley, Europe
OpenAI has quietly changed the core values it displays on its career page. AdvertisementAdvertisementOpenAI has quietly changed its core values on the company's careers page. OpenAI's new core values are now "AGI focus," "intense and scrappy," "scale," "make something people love," and "team spirit," per the company's careers page. The initial set of core values had been used since at least January 2022, per the Internet Archive. AdvertisementAdvertisementWhile some older core values seem to have been folded into new ones, others lack a clear replacement.
Persons: Altman, Geoffrey Hinton, Sam Altman, Elon Musk, Musk Organizations: OpenAI, The New Yorker, Microsoft
Geoffrey Hinton, the computer scientist known as a "Godfather of AI," says artificial intelligence-enhanced machines "might take over" if humans aren't careful. "One of the ways these systems might escape control is by writing their own computer code to modify themselves," said Hinton. Humans, including scientists like himself who helped build today's AI systems, still don't fully understand how the technology works and evolves, Hinton said. As Hinton described it, scientists design algorithms for AI systems to pull information from data sets, like the internet. Pichai and other AI experts don't seem nearly as concerned as Hinton about humans losing control.
Persons: Geoffrey Hinton, Hinton, Sundar Pichai, Yann LeCun Organizations: CBS, Google
Geoffrey Hinton voiced some alarming concerns about AI in a "60 Minutes" interview. The AI "godfather" says the tech is learning better than humans — and has the potential to do bad. All that AI is missing now, Hinton said, is the self-awareness to know how to use its intelligence to manipulate humans. "They'll know how to do it." And of course, there's the concern of using AI to replace people in jobs, generate fake news, and unintended bias going undetected. He recently expressed regret for his role in advancing AI, but said on "60 Minutes" he had no regrets for the good it can do.
Persons: Geoffrey Hinton, Hinton, Turing, Machiavelli Organizations: Google
DeepMind's Mustafa Suleyman recently talked about setting boundaries on AI with the MIT Tech Review. "You wouldn't want to let your little AI go off and update its own code without you having oversight," he told the MIT Technology Review. Last year, Suleyman cofounded the AI startup Inflection AI, whose chatbot Pi is designed to be a neutral listener and provide emotional support. Suleyman told the MIT Technology Review that though Pi is not "as spicy" as other chatbots, it is "unbelievably controllable." And while Suleyman told the MIT Technology Review he's "optimistic" that AI can be effectively regulated, he doesn't seem to be worried about a singular doomsday event.
Persons: Mustafa Suleyman, Suleyman, Sam Altman, Elon Musk, Mark Zuckerberg, Hassabis, Satya Nadella, Geoffrey Hinton, Yoshua Bengio Organizations: DeepMind, MIT Technology Review, Inflection AI, Future of Life Institute Locations: Silicon Valley, Washington
A newly released biography of Musk details how he justified poaching a Google scientist to then-CEO Larry Page. "And I was like, 'Larry, if you just hadn't been so cavalier about AI safety then it wouldn't really be necessary to have some countervailing force,'" Musk said. Sutskever joined Google's AI unit, Google Brain, in 2013 along with Geoffrey Hinton — also known as the "godfather of AI." When Musk started his own AI startup — xAI — in July, he again poached AI experts from Google and OpenAI.
Persons: Larry Page, Larry, Musk, Elon Musk, Sam Altman, Walter Isaacson, Altman, Ilya Sutskever, Sutskever, Geoffrey Hinton, Ilya, Isaacson Organizations: OpenAI, Google, Google Brain, xAI
Total: 25